61 research outputs found

    Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to establish a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that, under some conditions, the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for discrete and for continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
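As a loose illustration of the sampling-based view of neural activity, the sketch below samples from a small Boltzmann distribution by letting each "neuron" spike stochastically according to its current membrane potential. This is the simplest spike-based scheme (essentially sequential Gibbs sampling), not the paper's non-reversible construction with temporally extended spike variables; the weights and biases here are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target Boltzmann distribution p(z) ∝ exp(z·Wz/2 + b·z) over binary states z
# (symmetric weights W and biases b are arbitrary illustrative values)
W = np.array([[0.0,  1.0, -0.5],
              [1.0,  0.0,  0.8],
              [-0.5, 0.8,  0.0]])
b = np.array([-0.2, 0.1, 0.0])

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = rng.integers(0, 2, size=3).astype(float)   # initial network state
samples = []
for step in range(20000):
    k = step % 3                               # sweep over the neurons
    u = W[k] @ z + b[k]                        # "membrane potential" from network input
    z[k] = float(rng.random() < sigmoid(u))    # stochastic spike decision
    samples.append(z.copy())

# After a burn-in period, time-averaged firing activity estimates the
# marginal probabilities of the target distribution
marginals = np.mean(samples[5000:], axis=0)
```

The time average of the network state after burn-in converges to the marginals of the target distribution, which is the sense in which ongoing stochastic activity "is" inference.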

    Pharmacological prion protein silencing accelerates central nervous system autoimmune disease via T cell receptor signalling

    The primary biological function of the endogenous cellular prion protein has remained unclear. We investigated its biological function in the generation of cellular immune responses using cellular prion protein gene-specific small interfering ribonucleic acid in vivo and in vitro. Our results were confirmed by blocking cellular prion protein with monovalent antibodies and by using cellular prion protein-deficient and -transgenic mice. In vivo prion protein gene-small interfering ribonucleic acid treatment effects were of limited duration, restricted to secondary lymphoid organs and resulted in a 70% reduction of cellular prion protein expression in leukocytes. Disruption of cellular prion protein signalling augmented antigen-specific activation and proliferation, and enhanced T cell receptor signalling, resulting in zeta-chain-associated protein-70 phosphorylation and nuclear factor of activated T cells/activator protein 1 transcriptional activity. In vivo prion protein gene-small interfering ribonucleic acid treatment promoted T cell differentiation towards pro-inflammatory phenotypes and increased survival of antigen-specific T cells. Cellular prion protein silencing with small interfering ribonucleic acid also resulted in the worsening of actively induced and adoptively transferred experimental autoimmune encephalomyelitis. Finally, treatment of myelin basic protein1–11 T cell receptor transgenic mice with prion protein gene-small interfering ribonucleic acid resulted in spontaneous experimental autoimmune encephalomyelitis. Thus, central nervous system autoimmune disease was modulated at all stages of disease: the generation of the T cell effector response, the elicitation of T effector function and the perpetuation of cellular immune responses. Our findings indicate that cellular prion protein regulates T cell receptor-mediated T cell activation, differentiation and survival. 
    These effects on autoimmunity were restricted to the immune system and not the central nervous system. Our data identify cellular prion protein as a regulator of cellular immunological homoeostasis and suggest cellular prion protein as a novel potential target for therapeutic immunomodulation.

    Hebbian Learning of Bayes Optimal Decisions

    Uncertainty is omnipresent when we perceive or interact with our environment, and the Bayesian framework provides computational methods for dealing with it. Mathematical models for Bayesian decision making typically require data structures that are hard to implement in neural networks. This article shows that even the simplest and experimentally best-supported type of synaptic plasticity, Hebbian learning, in combination with a sparse, redundant neural code, can in principle learn to infer optimal Bayesian decisions. We present a concrete Hebbian learning rule operating on log-probability ratios. Modulated by reward signals, this Hebbian plasticity rule also provides a new perspective for understanding how Bayesian inference could support fast reinforcement learning in the brain. In particular, we show that recent experimental results by Yang and Shadlen [1] on reinforcement learning of probabilistic inference in primates can be modeled in this way.
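To illustrate the core idea, the hedged sketch below shows how purely Hebbian co-activation counts between binary features and a class signal yield log-odds weights that implement the Bayes-optimal (naive Bayes) linear decision. The feature probabilities and the counting formulation are illustrative simplifications; the paper's actual rule operates directly on log-probability ratios during learning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative generative model: two classes emit binary features with
# different Bernoulli probabilities (values chosen arbitrarily for the demo)
p_feat = {0: np.array([0.8, 0.2, 0.6]),
          1: np.array([0.3, 0.7, 0.5])}

# Hebbian "synaptic traces": co-activation counts of feature i with class c,
# initialized at 1 for Laplace smoothing
counts = np.ones((2, 3))
n_class = np.ones(2)

for _ in range(5000):
    c = rng.integers(0, 2)                          # class (teacher signal)
    x = (rng.random(3) < p_feat[c]).astype(float)   # binary feature vector
    counts[c] += x                                  # Hebbian: pre (x) times post (c)
    n_class[c] += 1

# The estimated conditionals give log-odds weights; the linear decision
# w·x + bias > 0 is the Bayes-optimal classifier for this generative model
p_hat = counts / (n_class[:, None] + 1)
w = (np.log(p_hat[1] / p_hat[0])
     - np.log((1 - p_hat[1]) / (1 - p_hat[0])))
bias = (np.log(n_class[1] / n_class[0])
        + np.log((1 - p_hat[1]) / (1 - p_hat[0])).sum())

def decide(x):
    return int(w @ x + bias > 0)
```

The point of the demonstration is that nothing beyond local co-activation statistics is needed: the log-probability ratios that define the optimal decision fall out of the counts.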

    STDP installs in Winner-Take-All circuits an online approximation to hidden Markov model learning

    In order to cross a street without being run over, we need to extract, very rapidly, the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with the functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP, because these mechanisms enable a rejection-sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task.
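A minimal sketch of the static ingredient of this idea: a soft winner-take-all layer whose winner-gated Hebbian updates perform an online, sample-based approximation to expectation-maximization for a mixture of Bernoulli causes. The prototypes, noise level, and learning rate are arbitrary demo choices, and the temporal (hidden Markov) dimension and spike timing of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hidden causes, each emitting a 6-bit pattern with 10% bit-flip noise
# (prototypes chosen arbitrarily for the demo)
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], dtype=float)

def sample_input():
    k = rng.integers(0, 2)
    flips = rng.random(6) < 0.1
    return np.abs(protos[k] - flips)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

K, D, eta = 2, 6, 0.05
W = rng.normal(0.0, 0.1, (K, D))   # log-odds weights of the K WTA neurons

for _ in range(5000):
    x = sample_input()
    s = sigmoid(W)                                    # each row: Bernoulli means of one cause
    u = x @ np.log(s).T + (1 - x) @ np.log(1 - s).T   # log-likelihood per cause
    p = np.exp(u - u.max()); p /= p.sum()             # soft WTA via lateral inhibition
    z = rng.choice(K, p=p)                            # winning neuron "spikes"
    W[z] += eta * (x - s[z])                          # winner-gated Hebbian update
```

With no supervision, the neurons specialize: each row of `sigmoid(W)` drifts toward the Bernoulli statistics of one hidden cause, so the circuit has learned a (static) generative model of its inputs from local plasticity alone.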